xview3-reference

/reference/train_reference.ipynb

The above notebook in the reference directory can get you started training the reference model with just a few scenes downloaded from the xView3 Challenge website.

The notebook provides numerical and visual feedback, demonstrates how the scoring metrics used for xView3 are computed, and includes code to visualize the performance of your model. The intent of this notebook is to (a) demonstrate an implementation for building a model on the xView3 dataset and (b) provide a set of tools that will allow you to develop your own intuition for these tasks. It is not intended to recommend a particular strategy or approach!
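
If you have set up the environment described below and Jupyter is available in it, one way to open the notebook from the repository root is:

jupyter notebook reference/train_reference.ipynb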

environment.yml

We strongly recommend using Anaconda to run the reference implementation. To set up the Python environment with the dependencies needed to run the reference implementation code, you can run:

conda env create -f environment.yml

You may also need to add the xview3 environment that this command creates to your list of Jupyter kernels. This can be done by executing:

conda activate xview3
pip install ipykernel
python -m ipykernel install --user --name xview3

Note: if you encounter an error involving ipywidgets, this can usually be fixed by executing the following:

conda activate xview3
conda install -c conda-forge ipywidgets
jupyter nbextension enable --py widgetsnbextension

/reference/metric.py

This script computes the scoring metric and outputs it to a JSON file.

When computing the scoring metric for the leaderboard and on the private holdout set, the xView3 Team uses:

python ./reference/metric.py --inference_file /path/to/solver_submission.csv \
--label_file /path/to/ground_truth.csv \
--output /path/to/output.json \
--distance_tolerance 200 \
--shore_tolerance 2 \
--shore_root /path/to/shoreline/contours \
--drop_low_detect \
--costly_dist
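
As a quick sanity check after scoring, you can pretty-print the resulting JSON from the command line; which keys it contains is determined by metric.py:

python -m json.tool /path/to/output.json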

/reference/inference.py

This script allows you to generate predictions using the trained model weights from the reference implementation.

Example usage:

python ./reference/inference.py --image_folder /home/xv3data \
--scene_ids 0157baf3866b2cf9v \
--output /home/xv3data/prediction/predict.csv \
--weights ./reference/model_weights.pth \
--chips_path /home/xv3data/chips \
--channels vh vv bathymetry
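
To take a quick look at the resulting predictions before scoring them, you can print the header and first few rows of the CSV; the columns should follow the submission format described on the challenge website:

head -n 5 /home/xv3data/prediction/predict.csv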

run_inference.sh

This is the entrypoint to the Docker container. Please note that the shell script takes 3 positional arguments as specified in the guidelines for submission verification. The three positional arguments are:

  1. --image_folder which is the path to the data directory.
  2. --scene_ids which is the list of xView3 scene identifiers for which you wish to run inference.
  3. --output which is the path to the prediction output filename in CSV format.

The other arguments used by inference.py, e.g., --channels and --chips_path, have been hard-coded in the shell script. You are welcome to use as many arguments as you like for your model. However, your entrypoint should have exactly the three positional arguments listed above for submission verification purposes.
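
For reference, a minimal entrypoint with this shape could look like the following sketch; the hard-coded values here are illustrative and not necessarily the ones shipped in run_inference.sh:

#!/bin/bash
# Positional arguments required for submission verification
IMAGE_FOLDER=$1   # path to the data directory
SCENE_IDS=$2      # xView3 scene identifier(s) to run inference on
OUTPUT=$3         # path to the prediction CSV

# Remaining arguments are fixed in the script (values below are illustrative)
python ./reference/inference.py --image_folder "$IMAGE_FOLDER" \
    --scene_ids $SCENE_IDS \
    --output "$OUTPUT" \
    --weights ./reference/model_weights.pth \
    --chips_path /on-docker/xv3data/chips \
    --channels vh vv bathymetry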

Dockerfile

Top solvers will be required to provide a container that executes their model in order to qualify for awards. This is an example of a Dockerfile meeting the required specification that supports CUDA and Miniconda. To build and tag a Docker image using the Dockerfile, first navigate to the root directory of this repo and then use:

docker build -t my-image-name:my-image-tag .

where the -t option names the image as my-image-name and tags it as my-image-tag. We recommend that you do not use the latest tag, as it can make it difficult to identify and debug a specific container image. Instead, we recommend tagging your container images with a semantic version, a scripted version bump, or a commit hash, so you can match your predictions to the specific commit or version of the code that produced them! For example, a my-image-name:my-image-tag combination may be xv3-reference:49a4494fe5bba36af31d6985caf17540de27bb1f.
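
To produce such a tag automatically from a clean checkout, one option is to embed the current commit hash at build time:

docker build -t xv3-reference:$(git rev-parse HEAD) .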

To test your image, you can use:

time docker run --shm-size 16G --gpus=1 --mount type=bind,source=/home/xv3data,target=/on-docker/xv3data my-image-name:my-image-tag /on-docker/xv3data/ 0157baf3866b2cf9v /on-docker/xv3data/prediction/prediction.csv

The example docker run utilizes a bind mount: source= specifies the local directory and target= specifies the corresponding directory in the container. The example above mounts the local /home/xv3data directory, giving your container read and write access; the container accesses the directory at /on-docker/xv3data. The example assumes that your local root xView3 data directory is /home/xv3data and that its subdirectories containing xView3 imagery are named by scene_id. For example, /home/xv3data/0157baf3866b2cf9v contains the imagery for scene 0157baf3866b2cf9v. Your Docker container will write the prediction CSV to /on-docker/xv3data/prediction/prediction.csv in the container, which is accessible locally at /home/xv3data/prediction/prediction.csv.

The time command will give you an idea of how long it takes for your model to process a scene. Keep in mind solvers are given a maximum of 15 minutes per scene on a V100 GPU.

Reading List

If you are unfamiliar with synthetic aperture radar (SAR), machine learning on SAR, and/or illegal, unreported, and unregulated (IUU) fishing, then this list will help you get started!

Synthetic Aperture Radar

Illegal, Unreported, and Unregulated Fishing

Machine Learning on Synthetic Aperture Radar Imagery
